Search for: All records

Creators/Authors contains: "DeVore, R"

Note: When clicking a Digital Object Identifier (DOI) link, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites whose policies may differ from those of this site.

  1. DeVore, R; Kunoth, A (Ed.)
    We determine the best n-term approximation of generalized Wiener model classes in a Hilbert space H. This theory is then applied to several special cases 
    Free, publicly-accessible full text available December 4, 2025
  2. DeVore, R; Kunoth, A (Ed.)
    We construct uniformly bounded solutions of the equation div u = f for arbitrary data f in the critical spaces L^d(Ω), where Ω is a domain of R^d. This question was addressed by Bourgain and Brezis [BB2003], who proved that although the problem has a uniformly bounded solution, it is critical in the sense that there exists no linear solution operator for general L^d data. We first discuss the validity of this existence result under weaker conditions than f ∈ L^d(Ω), and then focus our work on constructive processes for such uniformly bounded solutions. In the case d = 2, we present a direct one-step explicit construction, which generalizes for d > 2 to a (d − 1)-step construction based on induction. An explicit construction is also proposed for compactly supported data in L^{2,∞}(Ω) in the case d = 2. We further present constructive approaches based on the optimization of a certain loss functional adapted to the problem; this approach provides a two-step construction in the case d = 2. The optimization serves as the building block of a hierarchical multistep process introduced in [Tad2014] that converges to a solution in more general situations.
    Free, publicly-accessible full text available December 4, 2025
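For orientation, the boundedness assertion at the heart of the abstract above can be restated in standard notation (a paraphrase, not a quotation from the paper; C is a constant depending only on Ω and d, and compatibility conditions on f are omitted):

```latex
% Bounded solvability of the divergence equation for critical data (cf. [BB2003]):
% there exists u with
\operatorname{div} u = f \quad \text{in } \Omega \subset \mathbb{R}^d,
\qquad
\| u \|_{L^\infty(\Omega)} \le C \, \| f \|_{L^d(\Omega)},
% yet no bounded *linear* map f \mapsto u achieves this for general L^d data.
```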
  3. Free, publicly-accessible full text available December 4, 2025
  4. We consider the problem of numerically approximating the solutions to a partial differential equation (PDE) when there is insufficient information to determine a unique solution. Our main example is the Poisson boundary value problem, when the boundary data is unknown and instead one observes finitely many linear measurements of the solution. We view this setting as an optimal recovery problem and develop theory and numerical algorithms for its solution. The main vehicle employed is the derivation and approximation of the Riesz representers of these functionals with respect to relevant Hilbert spaces of harmonic functions. 
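A minimal sketch of the flavor of this recovery problem, not the paper's algorithm: once one works in a fixed finite-dimensional space of harmonic functions (here harmonic polynomials Re(z^k), Im(z^k) on the plane), fitting the finitely many measurements by least squares plays the role that the Riesz-representer computation plays in the full Hilbert-space setting. All function and variable names below are illustrative.

```python
import numpy as np

def harmonic_basis(points, degree):
    """Evaluate the harmonic polynomials Re(z^k), Im(z^k), k = 0..degree,
    at an array of 2D points (each row is (x, y))."""
    z = points[:, 0] + 1j * points[:, 1]
    cols = [np.ones(len(z))]
    for k in range(1, degree + 1):
        zk = z ** k
        cols.append(zk.real)
        cols.append(zk.imag)
    return np.column_stack(cols)

rng = np.random.default_rng(0)

# "Unknown" harmonic function: u(x, y) = x^2 - y^2 + 3y  (i.e. Re(z^2) + 3 Im(z))
u = lambda p: p[:, 0] ** 2 - p[:, 1] ** 2 + 3 * p[:, 1]

# Finitely many linear measurements: point values at sites inside the unit disk.
sites = rng.uniform(-0.7, 0.7, size=(40, 2))
data = u(sites)

# Least-squares fit in the harmonic-polynomial subspace.
A = harmonic_basis(sites, degree=4)
coef, *_ = np.linalg.lstsq(A, data, rcond=None)

# Predict the solution away from the data.
query = np.array([[0.3, -0.2]])
pred = harmonic_basis(query, degree=4) @ coef
print(float(pred[0]), float(u(query)[0]))
```

Because the target here happens to lie in the chosen subspace, the fit recovers it essentially exactly; in the general setting of the paper, the accuracy is instead governed by the optimal-recovery error over the model class.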
  5. This paper studies the problem of learning an unknown function f from given data about f. The learning problem is to give an approximation f^* to f that predicts the values of f away from the data. There are numerous settings for this learning problem depending on (i) what additional information we have about f (known as a model class assumption), (ii) how we measure the accuracy with which f^* predicts f, (iii) what is known about the data and data sites, and (iv) whether the data observations are polluted by noise. A mathematical description of the optimal performance possible (the smallest possible error of recovery) is known in the presence of a model class assumption. Under standard model class assumptions, it is shown in this paper that a near optimal f^* can be found by solving a certain discrete over-parameterized optimization problem with a penalty term. Here, near optimal means that the error is bounded by a fixed constant times the optimal error. This explains the advantage of over-parameterization, which is commonly used in modern machine learning. The main results of this paper prove that over-parameterized learning with an appropriate loss function gives a near optimal approximation f^* of the function f from which the data is collected. Quantitative bounds are given for how much over-parameterization needs to be employed and how the penalization needs to be scaled in order to guarantee a near optimal recovery of f. An extension of these results to the case where the data is polluted by additive deterministic noise is also given.
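A toy illustration of the over-parameterized setting described above, not the paper's construction: n = 20 noisy samples of a target function are fit with N = 200 random ReLU features (N much larger than n) by minimizing a penalized least-squares loss ||Ac − y||² + λ||c||². All names and the choice of features and penalty are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sin(3 * x)          # target function generating the data

n, N, lam = 20, 200, 1e-6            # samples, features (N >> n), penalty scale
x = np.sort(rng.uniform(-1, 1, n))
y = f(x) + 0.01 * rng.standard_normal(n)   # noisy observations

# Random ReLU feature map: t -> max(0, w_k t + b_k), k = 1..N
w = rng.standard_normal(N)
b = rng.uniform(-1, 1, N)
feats = lambda t: np.maximum(0.0, np.outer(t, w) + b)

# Penalized over-parameterized least squares: (A^T A + lam I) c = A^T y
A = feats(x)
c = np.linalg.solve(A.T @ A + lam * np.eye(N), A.T @ y)

# With a small penalty the fit (near-)interpolates the data, and the
# penalty selects a small-norm coefficient vector among the many fits.
t = np.linspace(x.min(), x.max(), 50)
err = np.max(np.abs(feats(t) @ c - f(t)))
print(err)
```

The point of the sketch is qualitative: among the infinitely many coefficient vectors that fit the data in the over-parameterized regime, the penalty picks out one with controlled norm, which is what the paper's quantitative bounds make precise.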
  6. Rebollo, Tomás C.; Donat, Rosa; Higueras, Inmaculada (Ed.)
    The exploration of complex physical or technological processes usually requires exploiting available information from different sources: (i) physical laws, often represented as a family of parameter-dependent partial differential equations, and (ii) data provided by measurement devices or sensors. The number of sensors is typically limited, and data acquisition may be expensive and in some cases even harmful. This article reviews some recent developments for this "small-data" scenario, where inversion is strongly aggravated by the typically large parametric dimensionality. The proposed concepts may be viewed as exploring alternatives to Bayesian inversion in favor of more deterministic accuracy quantification related to the required computational complexity. We discuss optimality criteria which delineate intrinsic information limits, and highlight the role of reduced models for developing efficient computational strategies. In particular, the need to adapt the reduced models—not to a specific (possibly noisy) data set but rather to the sensor system—is a central theme. This, in turn, is facilitated by exploiting geometric perspectives based on proper stable variational formulations of the continuous model.
  7.
  8.